4 - Principles of Programming Languages

So last week I talked a little bit about shared memory programming, parallel programming, and channels, and about Ada and the Ada rendezvous, and we'll continue on with that theme.

So we had shared variables and shared memory, which were easy to use, but there were race conditions, deadlocks, and livelocks in shared memory parallel programming.

So it's conceptually easy, but its implementation is tricky.
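
To make the race condition concrete, here is a minimal sketch in Java (the class name RaceDemo and the counts are our illustration, not from the lecture): two threads increment a shared counter without any synchronization, so increments get lost.

```java
// Two threads perform an unsynchronized read-modify-write on a shared field,
// so increments interleave and get lost: the final count is usually < 200000.
public class RaceDemo {
    static int counter = 0; // shared variable, no lock, not volatile

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read, add, write: three steps, not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter); // almost always less than 200000
    }
}
```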

So we'll talk about ways of getting around some of these problems.

There were also remote procedure calls for when you have different machines.

The problem with remote procedure calls on distributed machines is that the model is different from shared memory programming, so it's a little trickier to handle conceptually.

Then there are of course mixtures.

Mixtures where you're mixing shared memory programming with distributed memory programming

to get a sort of hybrid programming model where you as a programmer have more opportunities

for optimization.

That's the whole idea for hybrid shared memory, distributed memory programming.

If you go completely shared memory even on a distributed memory machine, effectively hiding the distributed nature of your cluster behind some shared memory abstraction, then the underlying system is responsible for all the performance.

And most of the time such a runtime system or a compiler cannot fully optimize your program.

If you go to a fully distributed memory machine with a distributed memory programming model,

then all of the optimization is in your hands and the compiler and the runtime systems don't

have to do anything at all.

Again, not optimal.

You want the compiler and the programming language to handle some of the burden of either optimization

or debuggability or correctness for you.

So it makes sense to have some sort of mixture between shared memory and distributed memory

programming.

We'll look at some of these models in a bit.

So the first idea: whether you have shared memory or distributed memory, these are all fairly low level.

You want to lift the level of abstraction a bit, and this is where Linda comes in, with tuple spaces.

The idea here is to write your program as if it were operating on a database.

So instead of having shared variables, you have records in a database that you manipulate.

For consistency, the system guarantees you that when you retrieve a tuple from the database, it will be extracted atomically.

The system itself can handle either shared memory or distributed memory.

If you have a distributed memory machine, so you have multiple individual machines, then internally, conceptually, the database is partitioned over all the machines.

So if on one machine you do a get from the database, retrieving a tuple, a record, from the database, then it will be removed from whatever machine holds the record and you get a copy.

Then you manipulate that record and you place it back into the database.

So the database is an abstraction over memory.
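
As a sketch of that abstraction, here is a minimal tuple space for a single shared-memory JVM; the class TupleSpace and its method names are our illustration, mirroring the Linda operations named below, and a real Linda system would additionally partition the space across machines, as just described.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal tuple space for one shared-memory JVM. A tuple is just an
// Object[]; null fields in a pattern act as wildcards (Linda's ?x formals).
public class TupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    // out: add a tuple to the space.
    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll(); // wake threads blocked in in()/rd()
    }

    // in: atomically extract a matching tuple; blocks until one exists.
    public synchronized Object[] in(Object... pattern) throws InterruptedException {
        while (true) {
            Object[] t = find(pattern);
            if (t != null) {
                tuples.remove(t); // removal happens while holding the lock
                return t;
            }
            wait(); // no match yet: wait for the next out()
        }
    }

    // rd: like in, but the tuple stays in the space.
    public synchronized Object[] rd(Object... pattern) throws InterruptedException {
        while (true) {
            Object[] t = find(pattern);
            if (t != null) {
                return t;
            }
            wait();
        }
    }

    private Object[] find(Object[] pattern) {
        for (Object[] t : tuples) {
            if (t.length == pattern.length && matches(t, pattern)) {
                return t;
            }
        }
        return null;
    }

    private boolean matches(Object[] t, Object[] pattern) {
        for (int i = 0; i < t.length; i++) {
            if (pattern[i] != null && !pattern[i].equals(t[i])) {
                return false;
            }
        }
        return true;
    }
}
```

The synchronized methods are what give the atomic extraction guarantee from above: no other thread can observe or remove a tuple between the match and the removal.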

So there are bindings for this tuple space model on top of various programming languages, for example C, C++, Pascal, Java, Prolog and so forth.

So the number of things that we add to the language for Linda support is only three or four.

out, to add a tuple to the database; in, to retrieve (and remove) a tuple from the database; and rd, to read, that is, to see whether a tuple is in the database without removing it.

There's also an eval operation, but we don't care about eval for now.
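
Putting the operations together, here is a hypothetical use of the sketch above (the class CounterDemo is ours), following the get-manipulate-put cycle described earlier: take the counter tuple out, update it, and place it back.

```java
public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        TupleSpace space = new TupleSpace();
        space.out("counter", 0); // put the initial record into the "database"

        // in() extracts the tuple atomically, so no two workers can hold
        // the counter at once: the race from the shared-variable version is gone.
        Object[] t = space.in("counter", null); // null = wildcard for the value
        int value = (int) t[1];
        space.out("counter", value + 1); // place the updated record back

        // rd() only inspects: the tuple remains in the space.
        Object[] peek = space.rd("counter", null);
        System.out.println(peek[1]); // prints 1
    }
}
```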
